8 - Deep Learning [ID:9828]


Okay, so sorry for being slightly late, but we are well in time and can still start with the lecture. Welcome back, everybody, to our deep learning lecture. Today we want to continue our journey through the realm of deep learning, and we will now talk about recurrent neural networks. So today we want to look into sequence classification and sequence processing.

We have a short motivation, then talk about simple recurrent networks, followed by long short-term memory units and gated recurrent units. We then compare them, discuss a couple of sampling strategies, and look into some examples.

So let's start with the motivation. Why are we interested in recurrent neural networks? Well, so far we had one input, let's say one image, and one result, and that's it. This is all we have been doing so far: we had these feed-forward neural networks where you have the input, some processing, and then you get your result. But there are lots of sequential and time-dependent signals, say speech, music, videos, or other sensor data recorded over time, such as temperature or energy consumption. In these cases you don't have just one input and one output; you have a sequence of inputs and a sequence of outputs. You could obviously look at snapshots, but they may not be as informative.

For example, consider the task of translation. Let's say you want to translate English to German. Obviously you can do that word by word, but if you try, you will very quickly notice that you need context. You need the words surrounding a given word so that you can make a better translation. You have probably noticed that dictionaries often list several translations for one word, together with explanations of how they are used in context. Obviously you also need to know the sentence structure, whether you are dealing with a verb or a noun and so on, because words may have the same spelling but a very different meaning. So the temporal context is important, and the next question is how we can integrate this into a network. How would you be able to do that?

You could take a simple approach and just feed the whole sequence into one big network: take the sequence, feed it into the network, and then try to predict other sequences. That is actually not a super useful idea. First of all, you have inefficient memory usage; it may be difficult or impossible to train, because you don't know how to establish the different relations; and it would be difficult to distinguish between spatial and temporal dimensions.
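To make the memory argument a little more concrete, here is a rough back-of-the-envelope sketch in Python (an editorial illustration, not part of the lecture; the sizes T, d, and h are made-up assumptions). It compares the parameters of one dense layer over the flattened sequence with those of a recurrent cell that is reused at every time step.

```python
# Illustrative parameter count: dense layer over the flattened sequence
# vs. a recurrent cell shared across all time steps.
# T, d, h are assumed example sizes, not values from the lecture.
T, d, h = 100, 256, 512          # sequence length, input features, hidden units

dense_params = (T * d) * h + h   # one big weight matrix plus bias
rnn_params = d * h + h * h + h   # input weights, recurrent weights, bias

print(f"flattened dense layer: {dense_params:,} parameters")
print(f"shared recurrent cell: {rnn_params:,} parameters")
# The dense variant also hard-codes the sequence length T, whereas the
# recurrent cell can process sequences of any length.
```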

Now, we claim here that this is a bad idea, but there are actually quite recent results, if you follow this link down here, showing that you can even build machine translation with convolutional neural networks. There is quite a bit of literature out there right now that employs just convolutional networks to perform tasks like these.

Okay, but for the time being we will stick with the recurrent networks, and we will look into them in some more detail. You will also see that the nice thing is that, as you use a recurrent network, you can shift it piece by piece and then also aim at real-time translation or real-time transcription. So the better approach here is to model the sequential behavior within the architecture, and this then gives rise to the recurrent neural networks. Okay, so let's start with simple recurrent networks.
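As a preview of what follows, here is a minimal sketch of such a simple (Elman-style) recurrent cell in NumPy. This is an editorial illustration; the function name simple_rnn and the weight shapes are assumptions, not code from the lecture.

```python
import numpy as np

def simple_rnn(xs, W_xh, W_hh, b_h):
    """Run an Elman-style recurrent cell over a sequence.
    xs has shape (T, d): one input vector per time step.
    Returns the hidden state for every step, shape (T, h)."""
    T = xs.shape[0]
    h = np.zeros(b_h.shape[0])               # initial hidden state
    hs = np.zeros((T, b_h.shape[0]))
    for t in range(T):                        # shift through the sequence piece by piece
        h = np.tanh(xs[t] @ W_xh + h @ W_hh + b_h)
        hs[t] = h
    return hs

# Tiny usage example with random weights (forward pass only, no training).
rng = np.random.default_rng(0)
d, h_dim, T = 4, 8, 5
W_xh = rng.normal(scale=0.1, size=(d, h_dim))
W_hh = rng.normal(scale=0.1, size=(h_dim, h_dim))
b_h = np.zeros(h_dim)
states = simple_rnn(rng.normal(size=(T, d)), W_xh, W_hh, b_h)
print(states.shape)   # (5, 8): one hidden state per time step
```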

Part of a video series:

Accessible via: open access

Recording date: 2018-12-04

Uploaded on: 2019-04-12 15:22:35

Language: en-US

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning. It comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, Alexnet, VGG, GoogleNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)

Tags

backpropagation neural gradient network recurrent RNN BPTT long short term LSTM GRU